While vision-and-language models perform well on tasks such as visual question answering, they struggle with basic human commonsense reasoning skills. In this work we introduce WinoGAViL: an online game for collecting vision-and-language associations (e.g., werewolves to a full moon), used as a dynamic benchmark for evaluating state-of-the-art models. Inspired by the popular card game Codenames, a spymaster gives a textual cue associated with several visual candidates, and another player must identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We use the game to collect 3.5K instances and find that they are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) scores 52%, succeeding mostly where the visual signal is salient. Our analysis, together with the feedback we collect from players, indicates that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more. We release the dataset, code, and interactive game, aiming to allow future data collection that can be used to develop models with better association capabilities.
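The >90% human agreement above is measured with the Jaccard index, i.e. set overlap between the image candidates a solver selects and the gold associations. A minimal sketch (the candidate names here are illustrative, not taken from the dataset):

```python
def jaccard_index(predicted, gold):
    """Jaccard index between two sets of selected image candidates."""
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        return 1.0
    return len(predicted & gold) / len(predicted | gold)

# A cue like "full moon" with hypothetical gold associations among candidates:
gold = {"werewolf", "night_sky", "tides"}
model_pick = {"werewolf", "night_sky", "street_lamp"}
print(jaccard_index(model_pick, gold))  # 2 shared / 4 in the union -> 0.5
```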
translated by Google Translate
A core process in human cognition is analogical mapping: the ability to identify a similar relational structure between different situations. We introduce a novel task, Visual Analogies of Situation Recognition, adapting the classical word-analogy task into the visual domain. Given a triplet of images, the task is to select an image candidate B' that completes the analogy (A to A' is like B to what?). Unlike previous work on visual analogy that focused on simple image transformations, we tackle complex analogies requiring understanding of scenes. We leverage situation recognition annotations and the CLIP model to generate a large set of 500k candidate analogies. Crowdsourced annotations for a sample of the data indicate that humans agree with the dataset label ~80% of the time (chance level 25%). Furthermore, we use human annotations to create a gold-standard dataset of 3,820 validated analogies. Our experiments demonstrate that state-of-the-art models do well when distractors are chosen randomly (~86%), but struggle with carefully chosen distractors (~53%, compared to 90% human accuracy). We hope our dataset will encourage the development of new analogy-making models. Website: https://vasr-dataset.github.io/
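A classical baseline for the A : A' :: B : ? task is vector arithmetic in an embedding space (as in word analogies): pick the candidate closest to B + (A' − A). This is a hedged sketch with toy 2D vectors, not the paper's actual CLIP-plus-situation-recognition pipeline:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def solve_analogy(a, a_prime, b, candidates):
    """Pick the (name, vector) candidate closest to b + (a' - a)."""
    target = [bb + (ap - aa) for aa, ap, bb in zip(a, a_prime, b)]
    return max(candidates, key=lambda cand: cosine(cand[1], target))[0]

# Toy embeddings: the "jumping" direction is the second axis.
candidates = [("dog_jumping", [2, 1]), ("dog_sleeping", [2, -1])]
print(solve_analogy([1, 0], [1, 1], [2, 0], candidates))  # -> "dog_jumping"
```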
Dynamical systems are found in innumerable forms across the physical and biological sciences, yet all these systems fall naturally into universal equivalence classes: conservative or dissipative, stable or unstable, compressible or incompressible. Predicting these classes from data remains an essential open challenge in computational physics at which existing time-series classification methods struggle. Here, we propose \texttt{phase2vec}, an embedding method that learns high-quality, physically-meaningful representations of 2D dynamical systems without supervision. Our embeddings are produced by a convolutional backbone that extracts geometric features from flow data and minimizes a physically-informed vector field reconstruction loss. In an auxiliary training period, embeddings are optimized so that they robustly encode the equations of unseen data over and above the performance of a per-equation fitting method. The trained architecture can not only predict the equations of unseen data but also, crucially, learns embeddings that respect the underlying semantics of the embedded physical systems. We validate the quality of the learned embeddings by investigating the extent to which physical categories of input data can be decoded from them, compared to standard black-box classifiers and state-of-the-art time series classification techniques. We find that our embeddings encode important physical properties of the underlying data, including the stability of fixed points, conservation of energy, and the incompressibility of flows, with greater fidelity than competing methods. We finally apply our embeddings to the analysis of meteorological data, showing that we can detect climatically meaningful features. Collectively, our results demonstrate the viability of embedding approaches for the discovery of dynamical features in physical systems.
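The reconstruction loss mentioned above compares a vector field sampled on a grid against a reconstruction of it. A minimal sketch for 2D linear systems dx/dt = Ax, where the "reconstruction" is simply another parameter matrix standing in for the learned decoder (the matrices below are illustrative):

```python
def linear_field(A, points):
    """Evaluate dx/dt = A @ x at each 2D grid point."""
    return [[A[0][0] * x + A[0][1] * y, A[1][0] * x + A[1][1] * y]
            for x, y in points]

def reconstruction_loss(field_true, field_pred):
    """Mean squared error between two sampled vector fields."""
    n = len(field_true)
    return sum((u - up) ** 2 + (v - vp) ** 2
               for (u, v), (up, vp) in zip(field_true, field_pred)) / n

# Coarse grid; compare a pure rotation (conservative) against a
# slightly damped estimate of it (dissipative).
grid = [(x, y) for x in (-1.0, 0.0, 1.0) for y in (-1.0, 0.0, 1.0)]
rotation = [[0.0, -1.0], [1.0, 0.0]]
estimate = [[-0.1, -1.0], [1.0, -0.1]]
loss = reconstruction_loss(linear_field(rotation, grid),
                           linear_field(estimate, grid))
print(loss)
```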
We present a new pre-trained language model (PLM) for Modern Hebrew, termed AlephBERTGimmel, which employs a much larger vocabulary (128K items) than previous standard Hebrew PLMs. We perform a contrastive analysis of this model against all previous Hebrew PLMs (mBERT, heBERT, AlephBERT) and assess the effects of larger vocabularies on task performance. Our experiments show that larger vocabularies lead to fewer splits, and that reducing splits improves model performance across different tasks. All in all, this new model achieves new SOTA on all available Hebrew benchmarks, including Morphological Segmentation, POS Tagging, Full Morphological Analysis, NER, and Sentiment Analysis. Consequently, we advocate for PLMs that are larger not only in terms of the number of layers or training data, but also in terms of their vocabulary. We release the new model publicly for unrestricted use.
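The "fewer splits" effect can be quantified as tokenizer fertility: the average number of subword pieces per word. A hedged toy illustration with a greedy longest-match-first tokenizer and English words (not the actual AlephBERTGimmel tokenizer or Hebrew vocabulary):

```python
def greedy_tokenize(word, vocab):
    """Greedy longest-match-first subword segmentation (WordPiece-style)."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            # Fall back to a single character if no vocab item matches.
            if word[i:j] in vocab or j == i + 1:
                pieces.append(word[i:j])
                i = j
                break
    return pieces

def fertility(words, vocab):
    """Average pieces per word: lower means fewer splits."""
    return sum(len(greedy_tokenize(w, vocab)) for w in words) / len(words)

words = ["unhappiness", "unhappy"]
small_vocab = {"un", "happy", "ness", "hap", "pi"}
large_vocab = small_vocab | {"unhappiness", "unhappy"}
print(fertility(words, small_vocab), fertility(words, large_vocab))  # 3.0 1.0
```

With the larger vocabulary, whole words survive as single tokens, so the model sees fewer fragments per sentence.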
In recent years, multiple deep neural network (DNN)-based methods have been proposed to solve the challenging ill-posed inverse problem of reconstructing MR images from undersampled "k-space" (Fourier-domain) data. However, instability against variations in the acquisition process and in the anatomical distribution suggests that DNN architectures generalize to the underlying physical model more poorly than their classical counterparts. Poor generalization effectively precludes the applicability of DNNs to undersampled MRI reconstruction in clinical settings. We improve the generalization capacity of DNN methods for undersampled MRI reconstruction by introducing a physically-primed DNN architecture and training approach. In addition to the observed data, our architecture encodes the undersampling mask within the model, and it is coupled with a training approach that uses data generated with various undersampling masks to encourage the model to generalize across instances of the undersampled MRI reconstruction problem. We demonstrate the added value of our approach through extensive experiments on the publicly available fastMRI dataset. Our physically-primed approach achieves enhanced generalization: compared with both vanilla DNN methods and DNNs trained with undersampling-mask augmentation, it shows significantly improved robustness against variations in the acquisition process and in the anatomical distribution, especially in pathological regions. Trained models and code to replicate our experiments will be made available for research purposes upon acceptance.
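The key architectural idea stated in the abstract is to hand the network the undersampling mask explicitly, alongside the observed measurements. A minimal, hedged sketch of that input construction on toy 1D "k-space" data (the real method operates on 2D complex k-space inside a full reconstruction network):

```python
def to_model_input(kspace, mask):
    """Stack the zero-filled measurements with the undersampling mask as an
    extra input channel, so the model knows which lines were acquired."""
    assert len(kspace) == len(mask)
    observed = [k * m for k, m in zip(kspace, mask)]  # zero-fill skipped lines
    return [observed, mask]  # channel 0: masked data, channel 1: the mask

kspace = [0.9, 0.4, 2.0, 0.4, 0.9]  # toy 1D "k-space" magnitudes
mask = [1, 0, 1, 0, 1]              # acquired (1) vs. skipped (0) lines
print(to_model_input(kspace, mask))
```

Training such a model on data generated with many different masks, as the abstract describes, then encourages mask-agnostic reconstruction.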
We show that neural networks with access to randomness can outperform deterministic networks via amplification. We call such networks Coin-Flipping Neural Networks, or CFNNs. We show that a CFNN can approximate the indicator function of a $d$-dimensional ball to arbitrary accuracy using only two layers and $\mathcal{O}(1)$ neurons, whereas a two-layer deterministic network was shown to require $\Omega(e^d)$ neurons, an exponential improvement (arXiv:1610.09887 [cs.LG]). We prove the highly non-trivial result that for almost any classification problem there exists a simple network that solves it, given a sufficiently powerful generator for the network's weights. Combining these results, we conjecture that for most classification problems there is a CFNN that solves them with higher accuracy or with fewer neurons than any deterministic network. Finally, we validate our proofs experimentally with novel CFNN architectures on CIFAR10 and CIFAR100, achieving an improvement of 9.25% over the baseline.
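Amplification here is the standard probabilistic trick: repeat a randomized decision that is correct with probability better than chance, and take a majority vote. A toy simulation of the effect (an abstract coin-flipping classifier, not the paper's architecture):

```python
import random

def weak_vote(p_correct, rng):
    """One randomized classifier call: correct with probability p_correct."""
    return rng.random() < p_correct

def amplified_accuracy(p_correct, n_votes, n_trials, seed=0):
    """Accuracy of a majority vote over n_votes independent coin-flipping
    copies of the same weak classifier, estimated over n_trials inputs."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        votes = sum(weak_vote(p_correct, rng) for _ in range(n_votes))
        correct += votes > n_votes // 2
    return correct / n_trials

print(amplified_accuracy(0.7, 1, 2000))    # single flip: around 0.7
print(amplified_accuracy(0.7, 101, 2000))  # majority of 101: near 1.0
```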
Text-based adversarial attacks are becoming more commonplace and accessible to general internet users. As these attacks proliferate, the need to address the gap in model robustness becomes imminent. While retraining on adversarial data may improve performance, there remains an additional class of character-level attacks on which these models fall short. Additionally, retraining a model is time- and resource-intensive, creating a need for a lightweight, reusable defense. In this work, we propose the Adversarial Text Normalizer, a novel method that restores baseline performance on attacked content with low computational overhead. We evaluate the efficacy of the normalizer on two problem areas prone to adversarial attacks, namely hate speech and natural language inference. We find that text normalization provides a task-agnostic defense against character-level attacks that can complement adversarial retraining solutions, which are better suited to semantic alterations.
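A character-level normalizer of the kind described maps common homoglyph and leetspeak substitutions back to canonical letters before classification. A minimal sketch with a small illustrative substitution table (not the paper's actual table or rules):

```python
# Illustrative subset of homoglyph / leetspeak substitutions.
CANONICAL = {
    "0": "o", "1": "l", "3": "e", "4": "a", "5": "s", "7": "t",
    "@": "a", "$": "s", "!": "i",
    "\u0430": "a",  # Cyrillic 'а' -> Latin 'a'
}

def normalize(text):
    """Undo character-level perturbations before feeding text to a model."""
    return "".join(CANONICAL.get(ch, ch) for ch in text.lower())

print(normalize("h4t3 sp33ch"))  # -> "hate speech"
```

Because the mapping is a pure per-character lookup, it adds negligible overhead and requires no model retraining, which is the selling point stated above.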
This paper introduces a novel dataset to help researchers evaluate their computer vision and audio models for accuracy across a diverse set of ages, genders, apparent skin tones, and ambient lighting conditions. Our dataset is composed of 3,011 subjects and contains over 45,000 videos, with an average of 15 videos per person. The videos were recorded in multiple U.S. states with a diverse set of adults across various age, gender, and apparent skin tone groups. A key feature is that each subject consented to the use of their likeness. Additionally, our age and gender annotations are provided by the subjects themselves. A group of trained annotators labeled the subjects' apparent skin tone using the Fitzpatrick skin type scale. Annotations for videos recorded in low ambient lighting are also provided. As an application for measuring the robustness of predictions for certain attributes, we provide a comprehensive study of the top five winners of the DeepFake Detection Challenge (DFDC). Experimental evaluation shows that the winning models perform worse on some specific groups of people, such as subjects with darker skin tones, and thus may not generalize to everyone. In addition, we evaluate state-of-the-art apparent age and gender classification methods. Our experiments provide a thorough analysis of these models with respect to fair treatment of people from various backgrounds.
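The fairness evaluation described above amounts to disaggregating accuracy by annotated subgroup. A minimal sketch with toy, made-up predictions bucketed by Fitzpatrick group (the numbers are illustrative, not results from the paper):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: (group, prediction, label) triples.
    Returns per-group accuracy, exposing performance gaps."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, label in records:
        totals[group] += 1
        hits[group] += pred == label
    return {g: hits[g] / totals[g] for g in totals}

# Toy detector outputs bucketed by Fitzpatrick skin-type group:
records = [
    ("I-II", 1, 1), ("I-II", 0, 0), ("I-II", 1, 1), ("I-II", 1, 0),
    ("V-VI", 1, 0), ("V-VI", 0, 1), ("V-VI", 1, 1), ("V-VI", 0, 0),
]
print(accuracy_by_group(records))  # {'I-II': 0.75, 'V-VI': 0.5}
```

A gap between groups, as in this toy output, is exactly the kind of disparity the dataset is designed to surface.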
Figure 1. The proposed pixel2style2pixel framework can be used to solve a wide variety of image-to-image translation tasks. Here we show results of pSp on StyleGAN inversion, multi-modal conditional image synthesis, facial frontalization, inpainting and super-resolution.